Experiments on Regularizing MLP

Author

  • Jouko Lampinen
Abstract

In this contribution we present results of using possibly inaccurate knowledge of model derivatives as part of the training data for a multilayer perceptron (MLP) network. Even simple constraints offer significant improvements, and the resulting models give better prediction performance than traditional data-driven MLP models.
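The idea of the abstract can be illustrated with a minimal sketch: add a penalty term that pulls the network's input derivative toward known (possibly noisy) derivative values, alongside the usual output error. The network architecture, data, penalty weight `lam`, and the finite-difference optimizer below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: target function and its (possibly inaccurate) derivative.
x = np.linspace(-2, 2, 20)
y = np.sin(x)   # target outputs
d = np.cos(x)   # known derivative values, used as extra training data

H = 8  # hidden units
# Parameter vector: [w1 (H), b1 (H), w2 (H), b2 (1)]
theta = rng.normal(scale=0.5, size=3 * H + 1)

def unpack(t):
    return t[:H], t[H:2*H], t[2*H:3*H], t[-1]

def forward(t, x):
    w1, b1, w2, b2 = unpack(t)
    h = np.tanh(np.outer(x, w1) + b1)   # (N, H) hidden activations
    f = h @ w2 + b2                     # network output
    df = ((1 - h**2) * w1) @ w2         # analytic df/dx of the MLP
    return f, df

def loss(t, lam=0.1):
    f, df = forward(t, x)
    # Output error plus derivative-constraint penalty.
    return np.mean((f - y)**2) + lam * np.mean((df - d)**2)

def num_grad(t, eps=1e-6):
    # Central-difference gradient; fine for this tiny parameter vector.
    g = np.zeros_like(t)
    for i in range(len(t)):
        tp, tm = t.copy(), t.copy()
        tp[i] += eps
        tm[i] -= eps
        g[i] = (loss(tp) - loss(tm)) / (2 * eps)
    return g

loss0 = loss(theta)
for _ in range(500):
    theta -= 0.1 * num_grad(theta)
assert loss(theta) < loss0  # combined loss decreases during training
```

Setting `lam = 0` recovers a purely data-driven MLP fit; a nonzero `lam` is what injects the derivative knowledge as a regularizer.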


Similar Resources

Regularizing Recurrent Networks - On Injected Noise and Norm-based Methods

Advancements in parallel processing have led to a surge in multilayer perceptron (MLP) applications and deep learning in the past decades. Recurrent Neural Networks (RNNs) give additional representational power to feedforward MLPs by providing a way to treat sequential data. However, RNNs are hard to train using conventional error backpropagation methods because of the difficulty in relating...


Feasibility of City Development Strategy in Enabling and Regularizing the Informal Settlements, Tabriz metropolis, district 1

Nowadays, informal settlements have become a common challenge in many cities, particularly in metropolises. On one hand, they are a spatial manifestation of social and economic inequalities and injustice at the local, regional, and national levels. On the other hand, they are the result of urban planning deficiency, absence of citizenship, and inattention to the social and economic needs of low-income ...


Regularizing active set method for nonnegatively constrained ill-posed multichannel image restoration problem.

In this paper, we consider the nonnegatively constrained multichannel image deblurring problem and propose regularizing active set methods for numerical restoration. For image deblurring problems, it is reasonable to solve a regularizing model with nonnegativity constraints because of the physical meaning of the image. We consider a general regularizing ℓp-ℓq model with nonnegativity constr...


Horn: A System for Parallel Training and Regularizing of Large-Scale Neural Networks

I introduce a new distributed system for effective training and regularizing of large-scale neural networks on distributed computing architectures. The experiments demonstrate the effectiveness of flexible model partitioning and parallelization strategies based on a neuron-centric computation model and the parallel ensemble technique, with an implementation of the collective and parallel dropout n...


Comparison of MLP and GMM Classifiers for Face Verification on XM2VTS

We compare two classifier approaches, namely classifiers based on Multi-Layer Perceptrons (MLPs) and Gaussian Mixture Models (GMMs), for use in a face verification system. The comparison is carried out in terms of performance, robustness and practicability. Apart from structural differences, the two approaches use different training criteria; the MLP approach uses a discriminative criterion, wh...



Journal:

Volume   Issue

Pages  -

Publication date: 1997